
    The earlier the better: a theory of timed actor interfaces

    Programming embedded and cyber-physical systems requires attention not only to functional behavior and correctness, but also to non-functional aspects and specifically timing and performance constraints. A structured, compositional, model-based approach based on stepwise refinement and abstraction techniques can support the development process, increase its quality and reduce development time through automation of synthesis, analysis or verification. For this purpose, we introduce in this paper a general theory of timed actor interfaces. Our theory supports a notion of refinement that is based on the principle of worst-case design that permeates the world of performance-critical systems. This is in contrast with the classical behavioral and functional refinements based on restricting or enlarging sets of behaviors. An important feature of our refinement is that it allows time-deterministic abstractions to be made of time-non-deterministic systems, improving efficiency and reducing complexity of formal analysis. We also show how our theory relates to, and can be used to reconcile, a number of existing time and performance models, and how their established theories can be exploited to represent and analyze interface specifications and refinement steps.
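
    A minimal sketch (not the paper's formalism) of the "earlier is better" refinement idea, assuming a timed actor interface can be abstracted as a function from input event timestamps to output event timestamps, and that an implementation refines an abstraction if it never produces outputs later. All names, delays and the jitter range below are illustrative.

    import random

    def actor_worst_case(inputs, wcet=3.0):
        # Time-deterministic abstraction: every output event appears exactly
        # wcet time units after its input event.
        return [t + wcet for t in inputs]

    def actor_jittery(inputs):
        # Time-non-deterministic implementation: varying delay, never above wcet.
        return [t + random.uniform(1.0, 3.0) for t in inputs]

    def refines(impl_outputs, spec_outputs):
        # "The earlier the better": the implementation refines the abstraction if
        # each of its output events occurs no later than the corresponding one.
        return all(a <= b for a, b in zip(impl_outputs, spec_outputs))

    inputs = [0.0, 2.0, 5.0]
    print(refines(actor_jittery(inputs), actor_worst_case(inputs)))  # expected: True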

    Parameterized Dataflow Scenarios

    Model-driven quality and resource management for CPSs

    A Cyber-Physical System (CPS) integrates cyber systems, human users, networks and physical systems. Thus, a CPS needs visual context and awareness to make autonomous and correct decisions. Advanced image and video processing is computationally intensive and challenging. Moreover, a CPS comprises increasingly complex and distributed configurations, which is reflected in the growing number of sensors, actuators and other smart devices. This leads to an exponential number of dynamic system configurations. To make matters worse, a CPS needs to simultaneously satisfy many rigorous constraints, e.g., hard deadlines, safety, quality, and performance. Hence, the system designer is confronted with an immense number of potential configurations, of which a number meet the constraints and only a fraction are optimal regarding certain qualities. This makes finding the optimal configurations hard, especially during run-time. A domain-specific language (DSL) for quality and resource management (QRM) is presented to specify these configurations conveniently and reason about them in an automated manner.
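
    The DSL itself is not shown here; as a rough illustration of the selection problem the abstract describes, the sketch below assumes configurations are simple tuples of quality and resource values, filters those meeting hypothetical constraints, and keeps the Pareto-optimal ones. All names and numbers are made up.

    # Hypothetical configuration records: (name, quality, energy, cpu_load).
    configs = [
        ("low-res",  0.60, 2.0, 0.30),
        ("mid-res",  0.80, 3.5, 0.55),
        ("high-res", 0.95, 6.0, 0.90),
    ]

    def feasible(c, max_energy=5.0, max_load=0.8):
        # Constraint check, e.g. an energy budget and a CPU utilisation bound.
        _, _, energy, load = c
        return energy <= max_energy and load <= max_load

    def dominates(a, b):
        # a dominates b: at least as good in every objective and strictly better
        # in one (higher quality, lower energy, lower load).
        better_eq = a[1] >= b[1] and a[2] <= b[2] and a[3] <= b[3]
        strictly = a[1] > b[1] or a[2] < b[2] or a[3] < b[3]
        return better_eq and strictly

    candidates = [c for c in configs if feasible(c)]
    pareto = [c for c in candidates
              if not any(dominates(d, c) for d in candidates if d is not c)]
    print(pareto)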

    Simultaneous Budget and Buffer Size Computation for Throughput-Constrained Task Graphs

    Modern embedded multimedia systems process multiple concurrent streams of data processing jobs. Streams often have throughput requirements. These jobs are implemented on a multiprocessor system as a task graph. Tasks communicate data over buffers, where tasks wait for sufficient space in output buffers before producing their data. For cost reasons, jobs share resources. Because jobs can share resources with other jobs that include tasks with data-dependent execution rates, we assume run-time scheduling on shared resources. Budget schedulers are applied, because they guarantee a minimum budget in a maximum replenishment interval. Both the buffer sizes and the budgets influence the temporal behaviour of a job. Interestingly, a trade-off exists: a larger buffer size can allow for a smaller budget while still meeting the throughput requirement. This work is the first to address the simultaneous computation of budget and buffer sizes. We solve this non-linear problem by formulating it as a second-order cone program. We present tight approximations to obtain a non-integral second-order cone program that has polynomial complexity. Our experiments confirm the non-linear trade-off between budget and buffer sizes.
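
    The paper's second-order cone formulation is not reproduced here. As a toy illustration of the budget/buffer trade-off only, the sketch below assumes a producer/consumer pair served by a budget scheduler, a simple latency-rate style response-time bound (P - B) + e*P/B for budget B per replenishment interval P, and the standard maximum-cycle-mean throughput bound for a two-actor cycle with a bounded buffer. All numbers are invented.

    def response_time(e, budget, period):
        # Assumed latency-rate bound: initial latency (period - budget) plus
        # service of e execution-time units at rate budget/period.
        return (period - budget) + e * period / budget

    def throughput(buffer_tokens, r_prod, r_cons):
        # Toy (max,+) style bound for a producer/consumer cycle:
        # throughput = 1 / maximum cycle mean.
        mcm = max(r_prod, r_cons, (r_prod + r_cons) / buffer_tokens)
        return 1.0 / mcm

    # Enumerate (budget, buffer) pairs that meet a throughput requirement,
    # showing that a larger buffer can compensate for a smaller budget.
    e_prod, e_cons, period, required = 2.0, 3.0, 10.0, 0.05
    for budget in range(1, 10):
        for buf in range(1, 8):
            r_p = response_time(e_prod, budget, period)
            r_c = response_time(e_cons, budget, period)
            if throughput(buf, r_p, r_c) >= required:
                print(f"budget={budget}, buffer={buf} meets the requirement")
                break  # smallest sufficient buffer for this budget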

    Interface Modeling for Quality and Resource Management

    We develop an interface-modeling framework for quality and resource management that captures configurable working points of hardware and software components in terms of functionality, resource usage and provision, and quality indicators such as performance and energy consumption. We base these aspects on partially ordered sets to capture quality levels, budget sizes, and functional compatibility. This makes the framework widely applicable and domain independent (although our focus is on embedded and cyber-physical systems). The framework paves the way for dynamic (re-)configuration and multi-objective optimization of component-based systems for quality- and resource-management purposes.
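
    A small sketch, not taken from the paper, of how partially ordered working points could be compared: each point is assumed to be a tuple of provided qualities (higher is better) and required budgets (lower is better), ordered component-wise, so that some points are simply incomparable. All values are illustrative.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class WorkingPoint:
        quality: tuple  # provided quality indicators (higher is better)
        budget: tuple   # required resource budgets (lower is better)

    def at_least_as_good(a, b):
        # Component-wise partial order: a provides at least b's quality
        # while requiring no more than b's budget.
        return (all(qa >= qb for qa, qb in zip(a.quality, b.quality)) and
                all(ra <= rb for ra, rb in zip(a.budget, b.budget)))

    low  = WorkingPoint(quality=(0.6,), budget=(20, 1.0))
    high = WorkingPoint(quality=(0.9,), budget=(60, 2.5))

    print(at_least_as_good(high, low))  # False: better quality, but larger budget
    print(at_least_as_good(low, high))  # False: the two points are incomparable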

    Cluster-Based Partial-Order Reduction

    Worst-case Throughput Analysis for Parametric Rate and Parametric Actor Execution Time Scenario-Aware Dataflow Graphs

    Scenario-aware dataflow (SADF) is a prominent tool for modeling and analysis of dynamic embedded dataflow applications. In SADF, the application is represented as a finite collection of synchronous dataflow (SDF) graphs, each of which represents one possible application behaviour or scenario. A finite state machine (FSM) specifies the possible orders of scenario occurrences. The SADF model renders the tightest possible performance guarantees, but is limited by its finiteness. This means that, from a practical point of view, it can only handle dynamic dataflow applications that are characterized by a reasonably sized set of possible behaviours or scenarios. In this paper, we remove this limitation for a class of SADF graphs by means of SADF model parametrization in terms of graph port rates and actor execution times. First, we formally define the semantics of the model relevant for throughput analysis based on (max,+) linear system theory and (max,+) automata. Second, by generalizing some of the existing results, we give algorithms for worst-case throughput analysis of parametric rate and parametric actor execution time acyclic SADF graphs with a fully connected, possibly infinite state transition system. Third, we demonstrate our approach on a few realistic applications from the digital signal processing (DSP) domain mapped onto an embedded multi-processor architecture.
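
    As a rough illustration of the (max,+) machinery (not the paper's algorithm), the sketch below assumes a scenario's timing is captured by a small (max,+) matrix of inter-iteration delays and estimates the asymptotic time per iteration, i.e. the (max,+) eigenvalue or maximum cycle mean, by repeated matrix-vector multiplication; throughput is its reciprocal. The matrix values are invented.

    NEG_INF = float("-inf")

    def maxplus_mv(M, x):
        # (max,+) matrix-vector product: (M x)_i = max_j (M[i][j] + x[j]).
        return [max(m + xj for m, xj in zip(row, x)) for row in M]

    # Hypothetical 2x2 scenario matrix: entry (i, j) is the minimum delay from
    # the previous iteration's event j to the current iteration's event i.
    M = [[3.0, 5.0],
         [NEG_INF, 4.0]]

    x = [0.0, 0.0]
    iterations = 200
    for _ in range(iterations):
        x = maxplus_mv(M, x)

    period = max(x) / iterations  # approximates the maximum cycle mean (here 4)
    print("worst-case period:", period)
    print("throughput bound:", 1.0 / period)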

    Parametrized dataflow scenarios

    The FSM-based scenario-aware dataflow (FSM-SADF) model of computation has been introduced to facilitate the analysis of dynamic streaming applications. FSM-SADF interprets an application's execution as the execution of a sequence of static modes of operation called scenarios. Each scenario is modeled using a synchronous dataflow (SDF) graph (SDFG), while a finite-state machine (FSM) is used to encode scenario occurrence patterns. However, FSM-SADF can precisely capture only those dynamic applications whose behaviours can be abstracted into a reasonably sized set of scenarios (coarse-grained dynamism). In many cases, however, the application may exhibit thousands or even millions of behaviours (fine-grained dynamism). In this work, we generalize the concept of FSM-SADF to one that is able to model dynamic applications exhibiting fine-grained dynamism. We achieve this by applying parametrization to FSM-SADF's base model, i.e. SDF, and defining scenarios over parametrized SDFGs. We refer to the extension as parametrized FSM-SADF (PFSM-SADF). We then present a novel, fully parametric analysis technique that allows us to derive tight worst-case performance (throughput and latency) guarantees for PFSM-SADF specifications. We evaluate our approach on a realistic case study from the multimedia domain.
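
    A sketch of the parametric idea only, not the paper's analysis: assuming a scenario whose actor execution time is a parameter p ranging over a known interval, and assuming the resulting cycle mean is non-decreasing in p, the worst-case throughput over all parameter values can be read off at the interval's upper bound. The cycle structure and interval below are made up.

    def cycle_mean(p):
        # Hypothetical graph with two cycles whose total delays depend on the
        # parametric execution time p, carrying 1 and 2 initial tokens.
        return max(3.0 + p, (5.0 + 2.0 * p) / 2.0)

    p_lo, p_hi = 1.0, 4.0            # assumed parameter interval
    worst_period = cycle_mean(p_hi)  # monotone in p, so the worst case is at p_hi
    print("worst-case throughput:", 1.0 / worst_period)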